Beta-binomial distribution

(Plots of the probability mass function and the cumulative distribution function.)

Parameters n \in \mathbb{N}_0 — number of trials
\alpha > 0 (real)
\beta > 0 (real)
Support k ∈ { 0, …, n }
PMF {n\choose k}\frac{\mathrm{B}(k+\alpha,\,n-k+\beta)}{\mathrm{B}(\alpha,\beta)}
CDF 1-\tfrac{\mathrm{B}(\beta+n-k-1,\,\alpha+k+1)\,{}_3F_2(\boldsymbol{a},\boldsymbol{b};k)}{\mathrm{B}(\alpha,\beta)\,\mathrm{B}(n-k,\,k+2)\,(n+1)}

where {}_3F_2(\boldsymbol{a},\boldsymbol{b};k) is the generalized hypergeometric function
{}_3F_2(1,\,\alpha+k+1,\,-n+k+1;\;k+2,\,-\beta-n+k+2;\;1)

Mean \frac{n\alpha}{\alpha+\beta}
Variance \frac{n\alpha\beta(\alpha+\beta+n)}{(\alpha+\beta)^2(\alpha+\beta+1)}
Skewness \tfrac{(\alpha+\beta+2n)(\beta-\alpha)}{(\alpha+\beta+2)}\sqrt{\tfrac{1+\alpha+\beta}{n\alpha\beta(n+\alpha+\beta)}}
Ex. kurtosis See text
MGF {}_2F_1(-n,\alpha;\alpha+\beta;1-e^{t}) \text{ for } t\in\mathbb{R}
CF {}_2F_1(-n,\alpha;\alpha+\beta;1-e^{it})

(Since the support is finite, the MGF and CF are defined for all real t.)

In probability theory and statistics, the beta-binomial distribution is a family of discrete probability distributions on a finite support of non-negative integers arising when the probability of success in each of a fixed or known number of Bernoulli trials is either unknown or random. It is frequently used in Bayesian statistics, empirical Bayes methods and classical statistics as an overdispersed binomial distribution.

It reduces to the Bernoulli distribution as a special case when n = 1. For α = β = 1, it is the discrete uniform distribution from 0 to n. It also approximates the binomial distribution arbitrarily well for large α and β. The beta-binomial is a one-dimensional version of the multivariate Pólya distribution, as the binomial and beta distributions are special cases of the multinomial and Dirichlet distributions, respectively.

Motivation and derivation

Beta-binomial distribution as a compound distribution

The beta distribution is a conjugate distribution of the binomial distribution. This fact leads to an analytically tractable compound distribution, where one can think of the parameter p in the binomial distribution as being randomly drawn from a beta distribution. Namely, if


 \begin{align}
 L(k|p) & = \operatorname{Bin}(k,p) \\
        & = {n\choose k}p^k(1-p)^{n-k}
 \end{align}

is the binomial distribution where p is a random variable with a beta distribution


 \begin{align}
 \pi(p|\alpha,\beta) & = \operatorname{Beta}(\alpha,\beta) \\
                     & = \frac{p^{\alpha-1}(1-p)^{\beta-1}}{\mathrm{B}(\alpha,\beta)}
 \end{align}

then the compound distribution is given by


 \begin{align}
 f(k|\alpha,\beta) & = \int_0^1 L(k|p)\,\pi(p|\alpha,\beta) \, dp \\
                   & = {n\choose k}\frac{1}{\mathrm{B}(\alpha,\beta)} \int_0^1 p^{k+\alpha-1}(1-p)^{n-k+\beta-1} \, dp \\
                   & = {n\choose k}\frac{\mathrm{B}(k+\alpha,\,n-k+\beta)}{\mathrm{B}(\alpha,\beta)}.
 \end{align}

Using the properties of the beta function, this can alternatively be written


 f(k|\alpha,\beta) = \frac{\Gamma(n+1)}{\Gamma(k+1)\Gamma(n-k+1)} \frac{\Gamma(\alpha+k)\Gamma(n+\beta-k)}{\Gamma(\alpha+\beta+n)} \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}.

It is within this context that the beta-binomial distribution appears often in Bayesian statistics: the beta-binomial is the predictive distribution of a binomial random variable with a beta distribution prior on the success probability.
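As a concrete illustration, the compound pmf above can be evaluated numerically. The following is a minimal Python sketch, assuming NumPy and SciPy are available (scipy.stats.betabinom requires SciPy ≥ 1.4); the helper name betabinom_pmf is ours. It computes the pmf through log-beta functions for numerical stability and checks it against SciPy's built-in implementation:

import numpy as np
from scipy.special import betaln, gammaln
from scipy.stats import betabinom

def betabinom_pmf(k, n, a, b):
    # {n choose k} * B(k+a, n-k+b) / B(a, b), computed on the log scale
    log_binom = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
    return np.exp(log_binom + betaln(k + a, n - k + b) - betaln(a, b))

k = np.arange(13)
print(np.allclose(betabinom_pmf(k, 12, 2.0, 3.0), betabinom.pmf(k, 12, 2.0, 3.0)))  # True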

Beta-binomial as an urn model

The beta-binomial distribution can also be motivated via an urn model (the Pólya urn scheme) for positive integer values of α and β. Specifically, imagine an urn containing α red balls and β black balls, from which random draws are made. If a red ball is observed, then two red balls are returned to the urn. Likewise, if a black ball is drawn, it is replaced and another black ball is added to the urn. If this is repeated n times, then the probability of observing k red balls follows a beta-binomial distribution with parameters n, α, and β.

Note that if the random draws are with simple replacement (no balls over and above the observed ball are added to the urn), then the distribution follows a binomial distribution and if the random draws are made without replacement, the distribution follows a hypergeometric distribution.
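A quick Monte Carlo check of the urn scheme, as a minimal sketch assuming NumPy and SciPy are available (the helper name polya_urn is ours):

import numpy as np
from scipy.stats import betabinom

rng = np.random.default_rng(0)

def polya_urn(n, alpha, beta, n_sims=100_000):
    # Count red balls observed in n draws; each drawn ball is returned
    # together with one extra ball of the same colour.
    reds = np.empty(n_sims, dtype=int)
    for s in range(n_sims):
        r, b, k = alpha, beta, 0
        for _ in range(n):
            if rng.random() < r / (r + b):
                k += 1
                r += 1
            else:
                b += 1
        reds[s] = k
    return reds

draws = polya_urn(n=5, alpha=2, beta=3)
print(np.bincount(draws, minlength=6) / len(draws))   # empirical frequencies
print(betabinom.pmf(np.arange(6), 5, 2, 3))           # beta-binomial pmf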

Moments and properties

The first three raw moments are

 
 \begin{align}
 \mu_1 & = \frac{n\alpha}{\alpha+\beta} \\[8pt]
 \mu_2 & = \frac{n\alpha[n(1+\alpha)+\beta]}{(\alpha+\beta)(1+\alpha+\beta)} \\[8pt]
 \mu_3 & = \frac{n\alpha[n^{2}(1+\alpha)(2+\alpha)+3n(1+\alpha)\beta+\beta(\beta-\alpha)]}{(\alpha+\beta)(1+\alpha+\beta)(2+\alpha+\beta)}
 \end{align}

and the excess kurtosis is

 
 \gamma_2 = \frac{(\alpha+\beta)^2(1+\alpha+\beta)}{n\alpha\beta(\alpha+\beta+2)(\alpha+\beta+3)(\alpha+\beta+n)} \left[ (\alpha+\beta)(\alpha+\beta-1+6n) + 3\alpha\beta(n-2) + 6n^2 - \frac{3\alpha\beta n(6-n)}{\alpha+\beta} - \frac{18\alpha\beta n^{2}}{(\alpha+\beta)^2} \right].

Letting \pi=\frac{\alpha}{\alpha+\beta} we note, suggestively, that the mean can be written as


\mu = \frac{n\alpha}{\alpha+\beta} = n\pi

and the variance as


\sigma^2 = \frac{n\alpha\beta(\alpha+\beta+n)}{(\alpha+\beta)^2(\alpha+\beta+1)} = n\pi(1-\pi)\frac{\alpha+\beta+n}{\alpha+\beta+1} = n\pi(1-\pi)[1+(n-1)\rho]

where \rho = \tfrac{1}{\alpha+\beta+1} is the pairwise correlation between the n Bernoulli draws and is called the over-dispersion parameter.
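These identities are easy to verify numerically. A minimal sketch, assuming SciPy's betabinom is available:

import numpy as np
from scipy.stats import betabinom

n, a, b = 12, 2.0, 3.0
pi = a / (a + b)
rho = 1.0 / (a + b + 1.0)

mean, var = betabinom.stats(n, a, b, moments="mv")
print(np.isclose(mean, n * pi))                                  # mean = n*pi
print(np.isclose(var, n * pi * (1 - pi) * (1 + (n - 1) * rho)))  # binomial variance inflated by 1+(n-1)rho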

Point estimates

Method of moments

The method of moments estimates can be obtained by noting the first and second moments of the beta-binomial, namely

 
 \begin{align}
 \mu_1 & = \frac{n\alpha}{\alpha+\beta} \\
 \mu_2 & = \frac{n\alpha[n(1+\alpha)+\beta]}{(\alpha+\beta)(1+\alpha+\beta)}
 \end{align}

and setting these raw moments equal to the sample moments

 
 \begin{align}
 \hat{\mu}_1 & = m_1 \\
 \hat{\mu}_2 & = m_2
 \end{align}

and solving for α and β we get


 \begin{align}
 \hat{\alpha} & = \frac{n m_1 - m_2}{n\left(\frac{m_2}{m_1}-m_1-1\right)+m_1} \\
 \hat{\beta} & = \frac{(n-m_1)\left(n-\frac{m_2}{m_1}\right)}{n\left(\frac{m_2}{m_1}-m_1-1\right)+m_1}.
 \end{align}

Note that these estimates can be nonsensically negative, which is evidence that the data are either undispersed or underdispersed relative to the binomial distribution. In this case, the binomial distribution and the hypergeometric distribution are alternative candidates, respectively.
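These estimators translate directly into code. A minimal sketch, assuming a fixed number of trials n across observations (the helper name beta_binom_mom is ours):

import numpy as np

def beta_binom_mom(counts, n):
    # Method-of-moments estimates of (alpha, beta) from success counts
    # out of n trials per observation.
    counts = np.asarray(counts, dtype=float)
    m1 = counts.mean()           # first raw sample moment
    m2 = (counts ** 2).mean()    # second raw sample moment
    denom = n * (m2 / m1 - m1 - 1) + m1
    alpha = (n * m1 - m2) / denom
    beta = (n - m1) * (n - m2 / m1) / denom
    return alpha, beta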

Maximum likelihood estimation

While closed-form maximum likelihood estimates are impractical, the pmf consists of common functions (gamma and/or beta functions), so they can easily be found via direct numerical optimization. Maximum likelihood estimates from empirical data can be computed using general methods for fitting multinomial Pólya distributions, as described in (Minka 2003). The R package VGAM, through the function vglm, facilitates the fitting of GLM-type models with responses distributed according to the beta-binomial distribution via maximum likelihood. Note also that there is no requirement that n be fixed throughout the observations.
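A direct-optimization sketch in Python, assuming SciPy is available (the helper name beta_binom_mle is ours); it optimizes over log-parameters so that α and β stay positive:

import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom

def beta_binom_mle(counts, n, init=(1.0, 1.0)):
    # Minimize the negative log-likelihood over (log alpha, log beta).
    counts = np.asarray(counts)

    def nll(log_params):
        a, b = np.exp(log_params)
        return -betabinom.logpmf(counts, n, a, b).sum()

    res = minimize(nll, np.log(init), method="Nelder-Mead")
    return np.exp(res.x), -res.fun  # (alpha_hat, beta_hat), maximized log-likelihood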

Example

The following data give the number of male children among the first 12 children of family size 13 in 6115 families taken from hospital records in 19th-century Saxony (Sokal and Rohlf, p. 59 from Lindsey). The 13th child is ignored to mitigate the effect of families non-randomly stopping when a desired gender is reached.

Males 0 1 2 3 4 5 6 7 8 9 10 11 12
Families 3 24 104 286 670 1033 1343 1112 829 478 181 45 7

We note the first two sample moments are

 
 \begin{align} 
   m_1 & = 6.23\\
   m_2 & = 42.31 \\
     n & = 12
 \end{align}

and therefore the method of moments estimates are

 
 \begin{align} 
   \hat{\alpha} & = 34.1350\\
   \hat{\beta} & = 31.6085.
 \end{align}

The maximum likelihood estimates can be found numerically

 
 \begin{align} 
   \hat\alpha_\mathrm{mle} & = 34.09558\\
   \hat\beta_\mathrm{mle} & = 31.5715
 \end{align}

and the maximized log-likelihood is


   \log \mathcal{L} = -12492.9

from which we find the AIC


   \mathit{AIC}=24989.74.

The AIC for the competing binomial model is AIC = 25070.34, and thus we see that the beta-binomial model provides a superior fit to the data, i.e. there is evidence for overdispersion. Trivers and Willard posit a theoretical justification for heterogeneity in gender-proneness among families (i.e. overdispersion).

The superior fit is especially evident in the tails:

Males 0 1 2 3 4 5 6 7 8 9 10 11 12
Observed Families 3 24 104 286 670 1033 1343 1112 829 478 181 45 7
Predicted (Beta-Binomial) 2.3 22.6 104.8 310.9 655.7 1036.2 1257.9 1182.1 853.6 461.9 177.9 43.8 5.2
Predicted (Binomial p = 0.519215) 0.9 12.1 71.8 258.5 628.1 1085.2 1367.3 1265.6 854.2 410.0 132.8 26.1 2.3
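The predicted rows can be reproduced from the fitted parameters reported above. A minimal sketch, assuming SciPy is available:

import numpy as np
from scipy.stats import betabinom, binom

males = np.arange(13)
families = np.array([3, 24, 104, 286, 670, 1033, 1343, 1112, 829, 478, 181, 45, 7])
N = families.sum()  # 6115 families

# Expected family counts under the two fitted models
print(np.round(N * betabinom.pmf(males, 12, 34.09558, 31.5715), 1))
print(np.round(N * binom.pmf(males, 12, 0.519215), 1))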

Further Bayesian considerations

It is convenient to reparameterize the distributions so that the expected mean of the prior is a single parameter: Let


 \begin{align} \pi(\theta|\mu,M) & = \operatorname{Beta}(\mu,M) \\
                                 & = \frac{\Gamma(M)}{\Gamma(\mu M)\Gamma(M(1-\mu))} 
                                     \theta^{M\mu-1}(1-\theta)^{M(1-\mu)-1}
 \end{align}

where

 
 \begin{align}
 \mu & = \frac{\alpha}{\alpha+\beta} \\
 M & = \alpha+\beta
 \end{align}

so that

 
 \begin{align}
 \operatorname{E}(\theta|\mu,M) & = \mu \\
 \operatorname{Var}(\theta|\mu,M) & = \frac{\mu(1-\mu)}{M+1}.
 \end{align}

The posterior distribution ρ(θ|k) is also a beta distribution:

 
\begin{align}
\rho(\theta|k) & \propto \ell(k|\theta)\,\pi(\theta|\mu,M) \\
               & = \operatorname{Beta}(k+M\mu,\, n-k+M(1-\mu)) \\
               & = \frac{\Gamma(M)}{\Gamma(M\mu)\Gamma(M(1-\mu))} {n\choose k}\,\theta^{k+M\mu-1}(1-\theta)^{n-k+M(1-\mu)-1}
\end{align}

with posterior mean


\operatorname{E}(\theta|k) = \frac{k+M\mu}{n+M},

while the marginal distribution m(k|μ, M) is given by


 \begin{align}
 m(k|\mu,M) & = \int_0^1 \ell(k|\theta)\,\pi(\theta|\mu,M) \, d\theta \\
            & = \frac{\Gamma(M)}{\Gamma(M\mu)\Gamma(M(1-\mu))} {n\choose k} \int_0^1 \theta^{k+M\mu-1}(1-\theta)^{n-k+M(1-\mu)-1} \, d\theta \\
            & = \frac{\Gamma(M)}{\Gamma(M\mu)\Gamma(M(1-\mu))} {n\choose k} \frac{\Gamma(k+M\mu)\Gamma(n-k+M(1-\mu))}{\Gamma(n+M)}.
 \end{align}

Because the marginal is a complex, non-linear function of Gamma and Digamma functions, it is quite difficult to obtain a marginal maximum likelihood estimate (MMLE) for the mean and variance. Instead, we use the method of iterated expectations to find the expected value of the marginal moments.

Let us write our model as a two-stage compound sampling model. Let k_i be the number of successes out of n_i trials for event i:

 
 \begin{align}
 k_i & \sim \operatorname{Bin}(n_i, \theta_i) \\
 \theta_i & \sim \operatorname{Beta}(\mu,M), \quad \mathrm{i.i.d.}
 \end{align}

We can find iterated moment estimates for the mean and variance using the moments for the distributions in the two-stage model:

\operatorname{E}\left(\frac{k}{n}\right) = \operatorname{E}\left[\operatorname{E}\left(\left.\frac{k}{n}\right|\theta\right)\right] = \operatorname{E}(\theta) = \mu

 \begin{align}
 \operatorname{var}\left(\frac{k}{n}\right) & = \operatorname{E}\left[\operatorname{var}\left(\left.\frac{k}{n}\right|\theta\right)\right] + \operatorname{var}\left[\operatorname{E}\left(\left.\frac{k}{n}\right|\theta\right)\right] \\
 & = \operatorname{E}\left[\left.\frac{1}{n}\,\theta(1-\theta)\right|\mu,M\right] + \operatorname{var}\left(\theta|\mu,M\right) \\
 & = \frac{1}{n}\,\mu(1-\mu) + \frac{n-1}{n}\,\frac{\mu(1-\mu)}{M+1} \\
 & = \frac{\mu(1-\mu)}{n}\left(1+\frac{n-1}{M+1}\right).
 \end{align}

(Here we have used the law of total expectation and the law of total variance.)

We want point estimates for \mu and M. The estimated mean \hat{\mu} is calculated from the sample

\hat{\mu} = \frac{\sum_{i=1}^N k_i }{\sum_{i=1}^N n_i }.

The estimate of the hyperparameter M is obtained using the moment estimates for the variance of the two-stage model:


 s^2 = \frac{1}{N} \sum_{i=1}^N \operatorname{var}\left(\frac{k_i}{n_i}\right) = \frac{1}{N} \sum_{i=1}^N \frac{\hat{\mu}(1-\hat{\mu})}{n_i} \left[1+\frac{n_i-1}{\widehat{M}+1}\right]

Solving:

\widehat{M} = \frac{\hat{\mu}(1-\hat{\mu}) - s^2}{s^2 - \frac{\hat{\mu}(1-\hat{\mu})}{N}\sum_{i=1}^N 1/n_i},

where

s^2 = \frac{N \sum_{i=1}^N n_i (\hat{\theta}_i - \hat{\mu})^2}{(N-1)\sum_{i=1}^N n_i}.

Since we now have parameter point estimates \hat{\mu} and \widehat{M} for the underlying distribution, we would like to find a point estimate \tilde{\theta}_i for the probability of success for event i. This is the weighted average of the event estimate \hat{\theta}_i = k_i/n_i and \hat{\mu}. Given our point estimates for the prior, we may now plug in these values to find a point estimate for the posterior:

\tilde{\theta}_i = \operatorname{E}(\theta|k_i) = \frac{k_i + \widehat{M}\hat{\mu}}{n_i+\widehat{M}} = \frac{\widehat{M}}{n_i+\widehat{M}}\,\hat{\mu} + \frac{n_i}{n_i+\widehat{M}}\,\frac{k_i}{n_i}.

Shrinkage factors

We may write the posterior estimate as a weighted average:

\tilde{\theta}_i = \hat{B}_i\,\hat{\mu} + (1-\hat{B}_i)\,\hat{\theta}_i

where \hat{B}_i is called the shrinkage factor:

\hat{B}_i = \frac{\widehat{M}}{\widehat{M}+n_i}.
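The whole empirical Bayes procedure fits in a few lines. A minimal sketch, assuming NumPy is available and that the k_i and n_i are given as arrays (the helper name eb_shrinkage is ours):

import numpy as np

def eb_shrinkage(k, n):
    # Moment estimates (mu_hat, M_hat) for the two-stage model, plus
    # shrunken per-event success probabilities.
    k, n = np.asarray(k, dtype=float), np.asarray(n, dtype=float)
    N = len(k)
    theta = k / n                                   # per-event estimates k_i / n_i
    mu = k.sum() / n.sum()                          # pooled mean
    s2 = N * (n * (theta - mu) ** 2).sum() / ((N - 1) * n.sum())
    M = (mu * (1 - mu) - s2) / (s2 - mu * (1 - mu) / N * (1 / n).sum())
    B = M / (M + n)                                 # shrinkage factors
    return mu, M, B * mu + (1 - B) * theta          # shrunken estimates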

References

* Minka, Thomas P. (2003). Estimating a Dirichlet distribution. Microsoft Technical Report.
